This is a list of important publications in computer science, organized by field.
Some reasons why a particular publication might be regarded as important:
Topic creator – a publication that created a new topic.
Breakthrough – a publication that changed scientific knowledge significantly.
Influence – a publication that has significantly influenced the world or has had a massive impact on the teaching of computer science.
Description: This paper discusses whether machines can think and proposes the Turing test as a method for answering that question.
Description: This summer research proposal inaugurated and defined the field. It contains the first use of the term artificial intelligence and this succinct description of the philosophical foundation of the field: "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." (See philosophy of AI.) The proposal invited researchers to the Dartmouth conference, which is widely considered the "birth of AI". (See history of AI.)
Description: This seminal 1965 paper sets out the mathematics of fuzzy set theory.
Description: This book introduced Bayesian methods to AI.
Description: The standard textbook in artificial intelligence. The book's web site lists over 1100 colleges and universities in 102 countries using it.
Description: The first paper written on machine learning. It emphasized the importance of training sequences, and the use of parts of previous solutions to problems in constructing trial solutions to new problems.
Description: This paper created the field of algorithmic learning theory.
Description: Computational learning theory, VC theory, statistical uniform convergence and the VC dimension.
Description: The probably approximately correct (PAC) learning framework.
Description: Development of the backpropagation algorithm for artificial neural networks. Note that the algorithm was first described by Paul Werbos in 1974.
Description: Decision trees are a common learning algorithm and a decision representation tool. Decision trees were developed by many researchers in many areas, even before this paper, but this paper is one of the most influential in the field.
Description: One of the papers that started the field of on-line learning. In this learning setting, a learner receives a sequence of examples, making predictions after each one, and receiving feedback after each prediction. Research in this area is remarkable because (1) the algorithms and proofs tend to be very simple and beautiful, and (2) the model makes no statistical assumptions about the data. In other words, the data need not be random (as in nearly all other learning models), but can be chosen arbitrarily by "nature" or even an adversary. Specifically, this paper introduced the winnow algorithm.
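To make the setting concrete, here is a minimal sketch of the winnow update rule (the threshold and multiplier below are the usual textbook choices, not necessarily those of the paper):
```python
def winnow(examples, n, alpha=2.0):
    """Online learning of a disjunction over n Boolean features.
    examples yields (x, y) pairs: x is a 0/1 vector, y is a 0/1 label."""
    w = [1.0] * n                  # all weights start at 1
    threshold = n                  # the standard choice of threshold
    mistakes = 0
    for x, y in examples:
        y_hat = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= threshold else 0
        if y_hat != y:             # update only on mistakes
            mistakes += 1
            for i in range(n):
                if x[i]:
                    # promote active features on a false negative,
                    # demote them on a false positive
                    w[i] *= alpha if y == 1 else 1.0 / alpha
    return w, mistakes
```
The attraction noted above shows up directly: for a target disjunction over k of the n variables, winnow makes only O(k log n) mistakes, no matter how adversarially the sequence is chosen.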
Description: The Temporal difference method for reinforcement learning.
Description: The complete characterization of PAC learnability using the VC dimension.
Description: Proving negative results for PAC learning.
Description: Proving that weak and strong learnability are equivalent in the noise-free PAC framework. The proof was done by introducing the boosting method.
Description: Proving possibility and impossibility results in the malicious errors framework.
Description: This paper presented support vector machines, a practical and popular machine learning algorithm. Support vector machines rely on the kernel trick, a widely used method.
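To illustrate the kernel trick mentioned here (a sketch with illustrative names, not the paper's own formulation): the SVM decision function touches inputs only through inner products, so a kernel can stand in for an inner product in a much richer feature space that is never built explicitly.
```python
import numpy as np

def polynomial_kernel(x, z, degree=2):
    # Equals the inner product of explicit degree-d polynomial feature
    # maps of x and z, without ever constructing those feature vectors.
    return (np.dot(x, z) + 1.0) ** degree

def svm_decision(x, support_vectors, alphas, labels, b):
    # Kernelized SVM decision function:
    #   f(x) = sign( sum_i alpha_i * y_i * K(x_i, x) + b )
    s = sum(a * y * polynomial_kernel(sv, x)
            for sv, a, y in zip(support_vectors, alphas, labels))
    return 1 if s + b >= 0 else -1
```
The practical impact comes precisely from this substitution being valid for any positive-definite kernel, not just the polynomial one used here.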
Description: The first application of supervised learning to gene expression data, in particular Support Vector Machines. The method is now standard, and the paper one of the most cited in the area.
Description: Bottom-up parsing for deterministic context-free languages, from which the LALR approach of Yacc later developed.
Description: About grammar attribution, the basis for Yacc's S-attributed and zyacc's LR-attributed approach.
Description: From the abstract: "The global data relationships in a program can be exposed and codified by the static analysis methods described in this paper. A procedure is given which determines all the definitions which can possibly reach each node of the control flow graph of the program and all the definitions that are live on each edge of the graph."
Description: Formalized the concept of data-flow analysis as fixpoint computation over lattices, and showed that most static analyses used for program optimization can be uniformly expressed within this framework.
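A minimal sketch of that framework applied to the reaching-definitions problem from the previous entry (function and variable names are illustrative): transfer functions over a powerset lattice are iterated until a fixpoint is reached.
```python
def reaching_definitions(preds, gen, kill):
    """preds[n]: predecessor nodes of CFG node n;
    gen[n]/kill[n]: sets of definitions generated/killed at n.
    Iterates OUT[n] = GEN[n] | (IN[n] - KILL[n]) to a fixpoint."""
    out = {n: set() for n in preds}
    changed = True
    while changed:
        changed = False
        for n in preds:
            in_n = set().union(*(out[p] for p in preds[n]))
            new_out = gen[n] | (in_n - kill[n])
            if new_out != out[n]:
                out[n], changed = new_out, True
    return out
```
Termination is guaranteed because each OUT set can only grow within a finite lattice, which is exactly the uniformity the paper established.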
Description: Yacc is a tool that made compiler writing much easier.
Description: The gprof profiler.
Description: This book became a classic in compiler writing. It is also known as the Dragon book, after the (red) dragon that appears on its cover.
Description: The Colossus machines were early computing devices used by British codebreakers to break German messages encrypted with the Lorenz Cipher during World War II. Colossus was an early binary electronic digital computer. The design of Colossus was later described in the referenced paper.
Description: It contains the first published description of the logical design of a computer using the stored-program concept, which has come to be known as the von Neumann architecture.
Description: The IBM System/360 (S/360) is a mainframe computer system family announced by IBM on April 7, 1964. It was the first family of computers making a clear distinction between architecture and implementation.
Description: The reduced instruction set computer (RISC) CPU design philosophy, which favors a smaller set of simpler instructions.
Description: The Cray-1 was a supercomputer designed by a team including Seymour Cray for Cray Research. The first Cray-1 system was installed at Los Alamos National Laboratory in 1976, and it went on to become one of the best known and most successful supercomputers in history.
Description: Amdahl's law, which bounds the overall speedup obtainable when only part of a computation can be sped up.
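In its usual modern statement (standard notation, not necessarily Amdahl's own): if a fraction P of a workload is sped up by a factor S, the overall speedup is
```latex
\text{Speedup} = \frac{1}{(1 - P) + P/S}
```
So even with P = 0.9 and an unbounded S, the overall speedup never exceeds 1/(1 − 0.9) = 10.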
Description: This paper discusses the concept of RAID disks, outlines the different levels of RAID, and the benefits of each level. It is a good paper for discussing issues of reliability and fault tolerance of computer systems, and the cost of providing such fault-tolerance.
Description: This paper argues that the approach taken to improving the performance of processors by adding multiple instruction issue and out-of-order execution cannot continue to provide speedups indefinitely. It lays out the case for making single chip processors that contain multiple "cores". With the mainstream introduction of multicore processors by Intel in 2005, and their subsequent domination of the market, this paper was shown to be prescient.
Description: The Academy of Motion Picture Arts and Sciences cited this paper as a "milestone in computer graphics".
Description: A correlation method based upon the inverse Fourier transform.
Description: A method for estimating the image motion of world points between two frames of a video sequence.
Description: This paper provides an efficient technique for image registration.
Description: A technique for image encoding using local operators of many scales.
Description: An interactive variational technique for image segmentation and visual tracking.
Description: A technique for visual tracking.
Description: A technique (scale-invariant feature transform) for robust feature description.
Topics covered: concurrent computing, parallel computing, and distributed computing.
Description: This paper introduced the relational model for databases, which became the dominant model for database systems.
Description: This paper introduced the B-tree data structure, which became one of the most widely used index structures in database systems.
Description: Completeness of Data Base Sublanguages
Description: This paper introduced the entity-relationship diagram (ERD) method of database design.
Description: This paper introduced the SQL language.
Description: This paper defined the concepts of transaction, consistency and schedule. It also argued that a transaction needs to lock a logical rather than a physical subset of the database.
Description: Association rules, a very common method for data mining.
Description: Perhaps the first book on the history of computation.
edited by:
Description: Several chapters by pioneers of computing.
Description: Presented the vector space model.
Description: Presented the inverted index.
Topics covered: cryptography and computer security, computer networks and the Internet.
Description: This paper discusses time-sharing as a method of sharing computer resources. This idea changed how people interact with computer systems.
Description: The beginning of cache memory. For more information, see the SIGOPS Hall of Fame.
Description: The classic paper on Multics, the most ambitious operating system in the early history of computing. Difficult reading, but it describes the implications of trying to build a system that takes information sharing to its logical extreme. Most operating systems since Multics have incorporated a subset of its facilities.
Description: This paper addresses issues in constraining the flow of information from untrusted programs. It discusses covert channels, but more importantly it addresses the difficulty in obtaining full confinement without making the program itself effectively unusable. The ideas are important when trying to understand containment of malicious code, as well as aspects of trusted computing.
Description: The Unix operating system and its principles were described in this paper. Its main importance lies not in the paper itself but in the operating system, which had a tremendous effect on operating systems and computer technology.
Description: This paper describes the consistency mechanism known as quorum consensus. It is a good example of algorithms that provide a continuous set of options between two alternatives (in this case, between the read-one write-all and the write-one read-all consistency methods). There have been many variations and improvements by researchers in the years that followed, and it is one of the consistency algorithms that should be understood by all. The options available by choosing different quorum sizes provide a useful structure for discussing the core requirements for consistency in distributed systems.
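The core rule behind that continuum, as a small sketch (illustrative names; Gifford's paper phrases this in terms of weighted votes): read and write quorums must overlap, which holds exactly when their sizes sum to more than the number of replicas.
```python
def quorums_intersect(n, r, w):
    """n replicas, read quorum size r, write quorum size w.
    Every read overlaps every committed write iff r + w > n."""
    return r + w > n

# The continuum described above, for n = 5 replicas:
assert quorums_intersect(5, r=1, w=5)   # read-one, write-all
assert quorums_intersect(5, r=5, w=1)   # write-one, read-all
assert quorums_intersect(5, r=3, w=3)   # majority quorums in between
```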
Description: This is the classic paper on synchronization techniques, including both alternate approaches and pitfalls.
Description: Gave algorithms for coscheduling of related processes.
Description: The file system of UNIX. One of the first papers discussing how to manage disk storage for high-performance file systems. Most file-system research since this paper has been influenced by it, and most high-performance file systems of the last 20 years incorporate techniques from this paper.
Description: Log-structured file system.
Description: This is a good paper discussing one particular microkernel architecture and contrasting it with monolithic kernel design. Mach underlies Mac OS X, and its layered architecture had a significant impact on the design of the Windows NT kernel and modern microkernels like L4. In addition, its memory-mapped files feature was added to many monolithic kernels.
Description: This paper describes the first production-quality implementation of a log-structured file system, which spawned much additional discussion of the viability and shortcomings of log-structured file systems. While "The Design and Implementation of a Log-Structured File System" was certainly first, this one was important in bringing the research idea to a usable system.
Description: A new way of maintaining filesystem consistency.
Description: This paper describes the design and implementation of the first FORTRAN compiler by the IBM team. Fortran is a general-purpose, procedural, imperative programming language that is especially suited to numeric computation and scientific computing.
Description: This paper introduced LISP, the first functional programming language, which was used heavily in many areas of computer science, especially in AI. LISP also has powerful features for manipulating LISP programs within the language.
Description: Algol 60 introduced block structure.
Description: Pascal introduced good programming practices using structured programming and data structuring.
Description: This seminal paper proposed an ideal language, ISWIM, which, though never implemented, influenced the whole later development of programming languages.
Description: This series of papers and reports first defined the influential Scheme programming language and questioned the prevailing practices in programming language design, employing lambda calculus extensively to model programming language concepts and guide efficient implementation without sacrificing expressive power.
Description: This textbook explains core computer programming concepts, and is widely considered a classic text in computer science.
Description: Co-authored by the man who designed the C programming language, the first edition of this book served for many years as the language's de facto standard. As such, the book is regarded by many to be the authoritative reference on C.
Description: Written by the man who designed the C++ programming language, the first edition of this book served for many years as the language's de facto standard until the publication of the ISO/IEC 14882:1998: Programming Language C++ standard on 1 September 1998.
Booth, T. L. (1969). "Probabilistic representation of formal languages". IEEE Conference Record of the 1969 Tenth Annual Symposium on Switching and Automata Theory. pp. 74–81. Contains the first presentation of stochastic context-free grammars.
Koskenniemi, Kimmo (1983), Two-level morphology: A general computational model of word-form recognition and production, Department of General Linguistics, University of Helsinki, http://www.ling.helsinki.fi/~koskenni/doc/Two-LevelMorphology.pdf. The first published description of computational morphology using finite state transducers. (Kaplan and Kay had previously done work in this field and presented it at a conference; the linguist Johnson had remarked on the possibility in 1972, but had not produced any implementation.)
Rabiner, Lawrence R. (1989). "A tutorial on hidden Markov models and selected applications in speech recognition". Proceedings of the IEEE 77 (2): 257–286. An overview of hidden Markov models geared toward speech recognition and other NLP fields, describing the Viterbi and forward-backward algorithms.
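As a pointer to what the tutorial covers, here is a compact sketch of the Viterbi algorithm (illustrative names; a production version would work in log space to avoid underflow):
```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence of an HMM for observations obs.
    start_p[s], trans_p[s][t], emit_p[s][o] are model probabilities."""
    # V[t][s] = (probability of the best path ending in s, its predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        V.append({s: max((V[-1][p][0] * trans_p[p][s] * emit_p[s][o], p)
                         for p in states)
                  for s in states})
    # Backtrack from the most probable final state.
    path = [max(states, key=lambda s: V[-1][s][0])]
    for step in reversed(V[1:]):
        path.append(step[path[-1]][1])
    return path[::-1]
```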
Brill, Eric (1995). "Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging". Computational Linguistics 21 (4): 543–566. Describes a now commonly used POS tagger based on transformation-based learning.
Manning, Christopher D.; Schütze, Hinrich (1999), Foundations of Statistical Natural Language Processing, MIT Press. A textbook on statistical and probabilistic methods in NLP.
Frost, Richard A. (2006). "Realization of Natural-Language Interfaces Using Lazy Functional Programming". ACM Computing Surveys 38 (4). http://cs.uwindsor.ca/~richard/PUBLICATIONS/NLI_LFP_SURVEY_DRAFT.pdf. This survey documents the relatively under-researched use of lazy functional programming languages (e.g., Haskell) to construct natural-language processors and to accommodate many linguistic theories.
Description: A conference of leading figures in the software field circa 1968. The resulting report defined the field of software engineering.
Description: Don't use goto – the beginning of structured programming.
Description: The importance of modularization and information hiding. Note that information hiding was first presented in a different paper by the same author – "Information Distribution Aspects of Design Methodology", Proceedings of IFIP Congress '71, 1971, Booklet TA-3, pp. 26–30.
Description: The beginning of object-oriented programming. This paper argued that programs should be decomposed into independent components with small, simple interfaces. It also argued that objects should have both data and related methods.
Description: Software specification.
Description: Seminal paper on Structured Design, data flow diagram, coupling, and cohesion.
Description: A lovely story of how large software projects can go right, and then wrong, and then right again, told with humility and humor. Illustrates the "second-system effect" and the importance of simplicity.
Description: Throwing more people at the task will not speed its completion...
Description: We will keep having problems with software...
Description: Open source methodology.
Description: This book was the first to define and list design patterns in computer science.
Description: Statecharts are a visual modeling method. They extend state machines and can be exponentially more compact, which enables formal modeling of applications that were previously too complex to model. Statecharts are part of the UML diagrams.
Topics covered: theoretical computer science, including computability theory, computational complexity theory, algorithms, algorithmic information theory, information theory and formal verification.